lp-gateway: Add multi-router support #1948
Conversation
return (Err(Error::<T>::NotEnoughRouters.into()), weight);
}

let expected_proof_count: u32 = (routers_count - 1) as u32;
Hey @hieronx, was wondering if the solidity side already has (testing) support for sending these message proofs. This will be necessary for our integration tests.
Yes, Solidity already has a full implementation of the message proofs, as well as initiate/dispute message recovery.
I left some comments but I think the idea is the correct one!
match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
    *count += 1;
Could the same router send the proof 3 times, so that we would count 3?
In this current state, yes. Especially given that we can get the same message more than once.
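The duplicate-counting concern above could be addressed by tracking *which* routers have submitted a proof rather than a bare counter. The following is a hypothetical, Substrate-free sketch (using in-memory maps instead of pallet storage; all names are illustrative, not the pallet's actual API) of counting distinct router "votes" per message proof:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical stand-in for the pallet's proof-count storage: instead of a
/// plain counter, keep the set of router hashes that submitted each proof,
/// so repeated submissions from the same router are counted only once.
struct ProofVotes {
    // message_proof -> set of router hashes that voted for it
    votes: HashMap<[u8; 32], HashSet<[u8; 32]>>,
}

impl ProofVotes {
    fn new() -> Self {
        Self { votes: HashMap::new() }
    }

    /// Records a proof submission from `router` and returns the number of
    /// distinct routers that have submitted this proof so far.
    fn record(&mut self, message_proof: [u8; 32], router: [u8; 32]) -> usize {
        let voters = self.votes.entry(message_proof).or_default();
        voters.insert(router); // inserting an existing hash is a no-op
        voters.len()
    }
}

fn main() {
    let mut v = ProofVotes::new();
    let msg = [1u8; 32];
    // The same router submitting three times still counts as one vote.
    assert_eq!(v.record(msg, [0xaa; 32]), 1);
    assert_eq!(v.record(msg, [0xaa; 32]), 1);
    assert_eq!(v.record(msg, [0xaa; 32]), 1);
    // A second router raises the distinct count to two.
    assert_eq!(v.record(msg, [0xbb; 32]), 2);
    println!("distinct voters: 2");
}
```

In pallet terms this would mean keying the inbound-proof storage by `(message_proof, router_hash)` or storing a bounded set per proof, at the cost of a slightly larger storage footprint.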
Added some tests with different non-proof/proof messages to ensure that these storages remain sane, please let me know what you think.
I'll check in the next review pass!
I think the overall flow is the correct one, but there are some math issues in it <- it was ok!
let _ = InboundMessages::<T>::clear(u32::MAX, None);
let _ = InboundMessageProofCount::<T>::clear(u32::MAX, None);
We can exceed the block weight here
True, should we maybe add a flag that marks this storage for clearing and then attempt to clear it in on_idle?
I think this is where the session_id comes to the rescue. on_idle should be as small as possible.
OK, I will wrap up the normal inbound message processing logic to make sure I get it right, then I can do another pass to improve the storages etc.
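The session_id idea discussed above can be illustrated with a small, Substrate-free sketch (names and types are hypothetical, not the pallet's actual storage API): instead of clearing all entries eagerly, which can exceed the block weight, each entry is tagged with the session it was written under, and "clearing" becomes an O(1) session bump:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of session-based invalidation. Entries written under
/// an old session are treated as absent; they can be pruned lazily later
/// (e.g. a few at a time in on_idle) without any correctness impact.
struct SessionedCounts {
    session: u64,
    // message_proof -> (session the entry was written under, count)
    counts: HashMap<[u8; 32], (u64, u32)>,
}

impl SessionedCounts {
    fn new() -> Self {
        Self { session: 0, counts: HashMap::new() }
    }

    /// Increments the proof count, resetting any stale entry left over
    /// from a previous session. Returns the new count.
    fn increment(&mut self, proof: [u8; 32]) -> u32 {
        let session = self.session;
        let entry = self.counts.entry(proof).or_insert((session, 0));
        if entry.0 != session {
            *entry = (session, 0); // stale entry from an old session
        }
        entry.1 += 1;
        entry.1
    }

    /// "Clearing" the whole storage is just a session bump: constant cost,
    /// regardless of how many entries exist.
    fn invalidate_all(&mut self) {
        self.session += 1;
    }
}

fn main() {
    let mut s = SessionedCounts::new();
    let proof = [7u8; 32];
    assert_eq!(s.increment(proof), 1);
    assert_eq!(s.increment(proof), 2);
    s.invalidate_all();
    // After the session bump, the old count no longer applies.
    assert_eq!(s.increment(proof), 1);
    println!("session-based invalidation works");
}
```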
return (Ok(()), weight);
}

if current_message_proof_count == total_expected_proof_count {
@cdamian, just one important thing. How do we know here that the counts belong to different routers and not to the same router? I think this is where the concept of a "vote" comes in.
That's the fun part, we do not...
Not sure if we can use the domain_address though. If it comes via EVM it's most likely the same sender - the gateway router contract deployed on evm. Or maybe I'm mixing things up...
It appears to me that we cannot rely on the domain_address, based on our Slack conversation. We need to assume that in the future each domain has more than one configured router. So I would derive a "special origin" for the router, e.g. in the case of Axelar axelar_gateway_precompile::Pallet::<T>::GatewayContract (see my Slack message).
Overall looks good! Of course we need to iron out open questions and apply our current Slack conversation. Really nice work with the UTs!
pub fn hash(&self) -> T::Hash {
    BlakeTwo256::hash(self.evm_chain.encode().as_slice())
}
Just to be sure: The router hash is solely used locally and not as part of any dispatched message right? Asking because Solidity usually uses keccak256.
Yes, this is mainly for the LP gateway to map a router hash to a router in storage.
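Since the hash is local-only, the exact algorithm does not need to match Solidity's keccak256; the only requirement is that equal router fields map to equal keys. A minimal, Substrate-free sketch of that shape (std's DefaultHasher stands in for BlakeTwo256 over SCALE-encoded fields; the struct and field names are illustrative, not the real router type):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical router with the one field hashed in the snippet above.
#[derive(Hash)]
struct EvmRouter {
    evm_chain: String,
}

impl EvmRouter {
    /// A stable local identifier for this router. It is only used as a
    /// storage key on this chain and never sent on the wire, so it does
    /// not need to agree with any hash computed on the Solidity side.
    fn local_hash(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.evm_chain.hash(&mut h);
        h.finish()
    }
}

fn main() {
    let a = EvmRouter { evm_chain: "ethereum".into() };
    let b = EvmRouter { evm_chain: "ethereum".into() };
    let c = EvmRouter { evm_chain: "base".into() };
    // Same fields -> same key; different fields -> different key.
    assert_eq!(a.local_hash(), b.local_hash());
    assert_ne!(a.local_hash(), c.local_hash());
    println!("router hashing is deterministic");
}
```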
let mut router_hashes = Vec::new();

for router in &routers {
    router.init().map_err(|_| Error::<T>::RouterInitFailed)?;
Nit: It would be great to extend the RouterInitFailed error to carry some payload, such as the router hash. Otherwise it will be hard to know which router caused a failure when running into issues during multi-router setup. E.g.
pub enum Error<T> {
/// Router initialization failed.
RouterInitFailed { router: T::Hash },
// snip
}
Awesome suggestion, thank you!
I didn't know Substrate supported custom content in the error; if it works, amazing!
Substrate did not support this from Genesis, but IIRC it was added early last year. However, since we have lagged behind Substrate versions so much, we haven't utilised it yet.
It seems we can also add other PalletErrors in there - https://github.com/paritytech/polkadot-sdk/blob/149c70938f2b29f8d92ba1cc952aeb63d4084e27/bridges/snowbridge/pallets/inbound-queue/src/lib.rs#L185.
}

fn get_expected_message_proof_count() -> u32 {
    T::MultiRouterCount::get() - 1
Again, we must use safe math as long as we cannot restrict MultiRouterCount to be at least one:

T::MultiRouterCount::get() - 1

becomes

T::MultiRouterCount::get().saturating_sub(1)
Yes, most of the logic for inbound messages is super-dirty, will change all of these once I figure out proof processing accordingly.
Makes absolute sense. Better have something working with some dirt than have a shining non-functional beauty 😅
match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
    count.ensure_add_assign(1)?;

    Ok(*count)
}) {
    Ok(r) => r,
    Err(e) => return Err(e),
};
Am I missing something, or can we simplify readability by utilising `?`:

match InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
    count.ensure_add_assign(1)?;
    Ok(*count)
}) {
    Ok(r) => r,
    Err(e) => return Err(e),
};

becomes

InboundMessageProofCount::<T>::try_mutate(message_proof, |count| {
    count.ensure_add_assign(1)?;
    Ok(*count)
})?;
Might drop these fugly try_mutates.
@@ -544,12 +735,12 @@ pub mod pallet {
/// weight for these operations in the `DispatchResultWithPostInfo`.
fn process_outbound_message(
    sender: T::AccountId,
    domain: Domain,
Placeholder (might answer itself further down below): I am curious how or where we determine the destination domain now?
Too many conflicts, gonna start fresh from main.
Force-pushed from 3a23458 to 80dbb03:
…bound routers, session ID storage
…ound message processing logic

Force-pushed from 80dbb03 to 9be4c58:
…ogic for removing invalid session IDs on idle
}
}

mod four_messages {
If it works for 3, can we ensure it will always work for 4?
In this case, we are testing with 2 routers, and the minimum number of messages needed for a successful message submission is 2. So, if it works with 2, it should work with more messages.
Initially, I added tests for up to 4 messages to confirm that duplicate messages are processed accordingly. Still thinking about generating correct expected results so we can slim these tests down a bit.
Will close this and rebase on #1958 soon.
Description

LP Gateway:

Multi-router
- Added a new extrinsic for adding multi-routers, and 2 new storages for multi-routers.
- The Router trait now has a hash() method that allows us to create a hash using certain router fields.

Inbound/outbound message handling:

Message proofs
- The LPEnconding trait has 2 new methods used for:

Outbound
- The OutboundMessageHandler implementation now queues one message using one router and message proofs using the remaining routers.

Inbound
- Added 2 new storages for keeping track of inbound messages and message proofs.
- The inbound message processing logic checks that the required number of proofs has been received before submitting the message to the InboundMessageHandler.

Checklist:
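The outbound fan-out described in the description (full message via one router, proofs via the remaining routers) can be sketched as follows. This is a hypothetical, Substrate-free illustration: the router names, `proof_of` function, and `Outbound` enum are all stand-ins, not the pallet's actual types.

```rust
/// What gets queued per router: either the full message payload or just
/// a proof (a stand-in hash of the encoded message in this sketch).
#[derive(Debug)]
enum Outbound {
    Message(Vec<u8>),
    Proof(u64),
}

/// Stand-in for the real message-proof computation.
fn proof_of(payload: &[u8]) -> u64 {
    payload
        .iter()
        .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(*b as u64))
}

/// The first router carries the full message; every remaining router
/// carries only the message proof.
fn queue_outbound(routers: &[&str], payload: Vec<u8>) -> Vec<(String, Outbound)> {
    let proof = proof_of(&payload);
    routers
        .iter()
        .enumerate()
        .map(|(i, router)| {
            let item = if i == 0 {
                Outbound::Message(payload.clone())
            } else {
                Outbound::Proof(proof)
            };
            (router.to_string(), item)
        })
        .collect()
}

fn main() {
    let queued = queue_outbound(&["axelar", "wormhole", "hyperlane"], vec![1, 2, 3]);
    assert_eq!(queued.len(), 3);
    assert!(matches!(queued[0].1, Outbound::Message(_)));
    assert!(matches!(queued[1].1, Outbound::Proof(_)));
    assert!(matches!(queued[2].1, Outbound::Proof(_)));
    println!("queued 3 items");
}
```

On the inbound side, the mirror-image check is then: deliver the message to the InboundMessageHandler only once the full message plus the expected number of proofs (router count minus one) have arrived.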